Multi-Task Learning for Spoken Language Understanding with Shared Slots
Authors
Abstract
This paper addresses the problem of learning multiple spoken language understanding (SLU) tasks that have overlapping sets of slots. In such a scenario, it is possible to achieve better slot filling performance by learning multiple tasks simultaneously, as opposed to learning them independently. We focus on presenting a number of simple multi-task learning algorithms for slot filling systems based on semi-Markov CRFs, assuming the knowledge of shared slots. Furthermore, we discuss an intradomain clustering method that automatically discovers shared slots from training data. The effectiveness of our proposed approaches is demonstrated in an SLU application that involves three different yet related tasks.
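As an illustration of the core idea, the sketch below ties the scoring parameters of slot labels that appear in more than one task, so training data from any task updates the same shared weights. It is a simplified neural tagger rather than the paper's semi-Markov CRF, and the task names, slot inventories, and BiLSTM encoder are assumptions made only for the example.

```python
# A minimal sketch (not the authors' exact semi-Markov CRF algorithm) of parameter
# tying for shared slots: tasks that share a slot label share its scoring weights.
# Task names, slot inventories, and the encoder below are illustrative assumptions.
import torch
import torch.nn as nn

class SharedSlotTagger(nn.Module):
    def __init__(self, vocab_size, task_slots, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim // 2, bidirectional=True, batch_first=True)
        # One weight vector per distinct slot label; tasks that share a label
        # share (and jointly train) the same parameter.
        all_slots = sorted({s for slots in task_slots.values() for s in slots})
        self.slot_weights = nn.ParameterDict(
            {s: nn.Parameter(torch.randn(hid_dim) * 0.01) for s in all_slots})
        self.task_slots = task_slots  # e.g. {"flights": ["O", "date", ...], ...}

    def forward(self, token_ids, task):
        h, _ = self.encoder(self.emb(token_ids))             # (B, T, hid_dim)
        W = torch.stack([self.slot_weights[s] for s in self.task_slots[task]])
        return h @ W.t()                                      # (B, T, n_task_slots)

# Usage: alternate minibatches from the different tasks; gradients on shared
# slots (e.g. "date") accumulate from every task that uses them.
task_slots = {"flights": ["O", "date", "from_city", "to_city"],
              "hotels":  ["O", "date", "city", "num_guests"]}
model = SharedSlotTagger(vocab_size=1000, task_slots=task_slots)
scores = model(torch.randint(0, 1000, (2, 7)), task="flights")
print(scores.shape)  # torch.Size([2, 7, 4])
```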
Similar Resources
A multiple classifier-based concept-spotting approach for robust spoken language understanding
In this paper, we present a concept-spotting approach using manifold machine learning techniques for robust spoken language understanding. The goal of this approach is to find proper values for the pre-defined slots of a given meaning representation. In particular, we propose a voting-based selection using multiple classifiers for robust spoken language understanding. This approach proposes no full level...
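As a rough illustration of voting-based selection, the snippet below majority-votes per-token slot labels produced by several classifiers; the BIO label format, the tie-breaking rule, and the example outputs are assumptions for the sketch, not the paper's exact selection scheme.

```python
# A minimal sketch of majority voting over several concept-spotting classifiers'
# slot predictions for the same utterance.
from collections import Counter

def vote_slots(predictions):
    """predictions: list of per-classifier label sequences for the same utterance."""
    voted = []
    for labels_at_pos in zip(*predictions):
        counts = Counter(labels_at_pos)
        # Pick the most frequent label; Counter.most_common breaks ties by
        # insertion order, i.e. the first classifier wins on a tie.
        voted.append(counts.most_common(1)[0][0])
    return voted

clf_outputs = [
    ["O", "B-date", "I-date", "O", "B-city"],   # classifier 1
    ["O", "B-date", "I-date", "O", "O"],        # classifier 2
    ["O", "B-date", "O",      "O", "B-city"],   # classifier 3
]
print(vote_slots(clf_outputs))  # ['O', 'B-date', 'I-date', 'O', 'B-city']
```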
The Impact of Language Learning Activities on the Spoken Language Development of 5-6-Year-Old Children in Private Preschool Centers of Langroud
N. Bagheri, M.A., E. Abbasi, Ph.D., M. GeramiPour, Ph.D. The present study was conducted to investigate the impact of language learning activities on the development of spoken language in 5-6-year-old children at private preschool center...
Context Memory Networks for Multi-objective Semantic Parsing in Conversational Understanding
The end-to-end multi-domain and multi-task learning of the full semantic frame of user utterances (i.e., domain and intent classes and slots in utterances) has recently emerged as a new paradigm in spoken language understanding. An advantage of the joint optimization of these semantic frames is that the data and feature representations learnt by the model are shared across different tasks (e.g...
Multi-Domain Adversarial Learning for Slot Filling in Spoken Language Understanding
The goal of this paper is to learn cross-domain representations for the slot filling task in spoken language understanding (SLU). Most of the recently published SLU models are domain-specific ones that work on individual task domains. Annotating data for each individual task domain is both financially costly and non-scalable. In this work, we propose an adversarial training method in learning commo...
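A common way to realize this kind of adversarial training is a gradient reversal layer in front of an auxiliary domain classifier, sketched below. This shows the general technique only; the layer sizes, heads, and loss combination are illustrative assumptions rather than the cited model.

```python
# A minimal sketch of domain-invariant feature learning with an adversarial
# domain classifier via a gradient reversal layer.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor,
        # pushing the shared features to confuse the domain classifier.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

hid_dim, n_domains, n_slots = 128, 3, 20
slot_head = nn.Linear(hid_dim, n_slots)          # task objective: slot tagging
domain_head = nn.Linear(hid_dim, n_domains)      # adversary: predict the domain

features = torch.randn(4, hid_dim, requires_grad=True)  # shared encoder output
slot_logits = slot_head(features)
domain_logits = domain_head(grad_reverse(features))     # adversarial branch
# total loss = slot_loss(slot_logits, ...) + domain_loss(domain_logits, ...)
```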
Deep contextual language understanding in spoken dialogue systems
We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context-sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components...
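The sketch below shows the general shared-feature multi-task setup this line describes, with one encoder feeding both an intent head and a slot head. The cited work uses an RCNN with globally normalized sequence modeling, whereas the BiLSTM encoder, mean pooling, and layer sizes here are simplifying assumptions for illustration.

```python
# A minimal sketch of joint SLU: a shared encoder with an utterance-level intent
# head and a token-level slot-filling head.
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    def __init__(self, vocab_size, n_intents, n_slots, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim // 2, bidirectional=True, batch_first=True)
        self.intent_head = nn.Linear(hid_dim, n_intents)  # utterance-level classification
        self.slot_head = nn.Linear(hid_dim, n_slots)      # token-level sequence labeling

    def forward(self, token_ids):
        h, _ = self.encoder(self.emb(token_ids))         # shared features (B, T, hid_dim)
        intent_logits = self.intent_head(h.mean(dim=1))  # pool over time for the intent
        slot_logits = self.slot_head(h)                  # one label distribution per token
        return intent_logits, slot_logits

model = JointSLU(vocab_size=1000, n_intents=5, n_slots=12)
intent_logits, slot_logits = model(torch.randint(0, 1000, (2, 9)))
print(intent_logits.shape, slot_logits.shape)  # torch.Size([2, 5]) torch.Size([2, 9, 12])
```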
Journal:
Volume, Issue:
Pages: -
Publication date: 2011